13 research outputs found

    Dynamic escape game

    We introduce Dynamic Escape Game (DEC), a tool that provides emergency evacuation plans in situations where some of the escape paths may become unavailable at runtime. We formalize the setting as a two-player turn-based reachability game in which the universal player has the power to inhibit, at runtime, some of the existential player's moves. Thus, the universal player can change the structure of the game arena along a play. DEC uses a graphical interface to depict the game and displays a winning play whenever one exists.
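
    The abstract describes a turn-based reachability game whose arena the universal player can reshape at runtime. As a rough, hedged illustration of the underlying game-solving step only (not the DEC tool, and without the dynamic inhibition mechanism), the sketch below computes the standard attractor set of a plain two-player turn-based reachability game; the arena, node names, and target are invented for the example.

    ```python
    # Minimal sketch: attractor computation for a turn-based reachability game.
    # "E" nodes belong to the existential (escaping) player, "A" nodes to the
    # universal (adversarial) player; the arena below is an illustrative assumption.

    def attractor(nodes, owner, edges, target):
        """Nodes from which player "E" can force a visit to `target`."""
        win = set(target)
        changed = True
        while changed:
            changed = False
            for v in nodes:
                if v in win:
                    continue
                succs = edges.get(v, [])
                if owner[v] == "E":
                    ok = any(s in win for s in succs)                   # E needs one good move
                else:
                    ok = bool(succs) and all(s in win for s in succs)   # A must have no escape
                if ok:
                    win.add(v)
                    changed = True
        return win

    # Tiny arena: two corridors lead towards "exit"; the adversary controls "c2".
    nodes = ["start", "c1", "c2", "exit", "blocked"]
    owner = {"start": "E", "c1": "E", "c2": "A", "exit": "E", "blocked": "A"}
    edges = {"start": ["c1", "c2"], "c1": ["exit"], "c2": ["exit", "blocked"], "blocked": []}
    print("start" in attractor(nodes, owner, edges, {"exit"}))   # True: escape via c1
    ```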

    Enabling Markovian representations under imperfect information

    Markovian systems are widely used in reinforcement learning (RL) when the successful completion of a task depends exclusively on the last interaction between an autonomous agent and its environment. Unfortunately, real-world instructions are typically complex and often better described as non-Markovian. In this paper we present an extension method that allows solving partially-observable non-Markovian reward decision processes (PONMRDPs) by solving equivalent Markovian models. This potentially allows state-of-the-art Markovian techniques, including RL, to find optimal behaviours for problems best described as PONMRDPs. We provide formal optimality guarantees for our extension method, together with a counterexample illustrating that naive extensions of existing techniques for fully-observable environments cannot provide such guarantees.
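
    As a loose sketch of the general state-extension idea described here (not the paper's construction), the snippet below makes a history-dependent reward Markovian by pairing each observation with the state of a small task-tracking automaton; the automaton, events, and reward values are invented for the example.

    ```python
    # Hypothetical task automaton: "pick up, then deliver" earns the reward.
    TASK_DFA = {
        ("waiting", "pickup"): "carrying",
        ("carrying", "deliver"): "done",
    }

    def extended_step(observation, dfa_state, event):
        """One step of the extended model: the pair (observation, dfa_state) is
        Markovian because the reward depends only on this pair and the event."""
        next_dfa = TASK_DFA.get((dfa_state, event), dfa_state)
        reward = 1.0 if next_dfa == "done" and dfa_state != "done" else 0.0
        return (observation, next_dfa), reward

    state = ("room_a", "waiting")
    for event in ["pickup", "deliver"]:
        state, reward = extended_step(state[0], state[1], event)
        print(state, reward)   # ('room_a', 'carrying') 0.0, then ('room_a', 'done') 1.0
    ```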

    Approximating perfect recall when model checking strategic abilities

    We investigate the notion of bounded recall in the context of model checking ATL* and ATL specifications in multi-agent systems with imperfect information. We present a novel three-valued semantics for ATL*, respectively ATL, under bounded recall and imperfect information, and study the corresponding model checking problems. Most importantly, we show that the three-valued semantics constitutes an approximation with respect to the traditional two-valued semantics. In the light of this, we construct a sound, albeit partial, algorithm for model checking two-valued perfect recall via its approximation as three-valued bounded recall.
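
    The last sentence suggests a sound-but-partial decision procedure. Below is a minimal, hypothetical driver for such a procedure, assuming access to some three-valued bounded-recall checker passed in as a callable: definite verdicts are trusted, while an undefined answer leads to retrying with a larger recall bound. None of the names or the formula syntax come from the paper or any tool.

    ```python
    from enum import Enum

    class Verdict(Enum):
        TRUE = "true"
        FALSE = "false"
        UNDEFINED = "undefined"

    def check_perfect_recall(check, model, formula, max_bound=5):
        """Query a three-valued bounded-recall checker with growing bounds;
        a definite answer is returned as-is, otherwise the result is inconclusive."""
        for bound in range(1, max_bound + 1):
            verdict = check(model, formula, bound)
            if verdict is not Verdict.UNDEFINED:
                return verdict
        return Verdict.UNDEFINED

    # Stub standing in for a real three-valued checker: undefined for small
    # recall bounds, definite once the bound is large enough.
    stub = lambda model, formula, bound: Verdict.TRUE if bound >= 3 else Verdict.UNDEFINED
    print(check_perfect_recall(stub, None, "<<A>> F goal"))   # Verdict.TRUE
    ```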

    Give Me a Hand: How to Use Model Checking for Multi-Agent Systems to Help Runtime Verification and Vice Versa

    In this paper, we review the history of model checking and runtime verification for multi-agent systems by recalling the results obtained in the two research areas. Then, we present some past, present and future directions for combining these techniques in both directions, that is, by using model checking for multi-agent systems to solve runtime verification problems and vice versa.

    Natural strategic ability under imperfect information

    Strategies in game theory and multi-agent logics are mathematical objects of remarkable combinatorial complexity. Recently, the concept of natural strategies has been proposed to model more human-like reasoning about simple plans and their outcomes. So far, the theory of such simple strategic play was only considered in scenarios where all the agents have perfect information about the state of the game. In this paper, we extend the notion of natural strategies to games with imperfect information. We also show that almost all the complexity results for model checking carry over from the perfect to the imperfect information setting. That is, verification of natural strategies is usually no more complex for agents with uncertainty. This tells games of natural strategic ability clearly apart from most results in game theory and multi-agent logics. © 2019 International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.

    Reasoning about natural strategic ability

    In game theory, as well as in the semantics of game logics, a strategy can be represented by any function from states of the game to the agent's actions. That makes sense from the mathematical point of view, but not necessarily in the context of human behavior. This is because humans are quite bad at executing complex plans, and also rather unlikely to come up with such plans in the first place. In this paper, we adopt the view of bounded rationality, and look only at "simple" strategies in specifications of agents' abilities. We formally define what "simple" means, and propose a variant of alternating-time temporal logic that takes only such strategies into account. We also study the model checking problem for the resulting semantics of ability.
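
    As a rough, hedged illustration of what a "simple" strategy can look like (the paper's exact definition is not reproduced here), the sketch below represents a strategy as an ordered list of guarded condition-action rules and fires the first rule whose guard holds; the propositions, actions, and the default last rule are invented for the example.

    ```python
    # Minimal sketch: a "natural"-style strategy as an ordered list of
    # condition -> action rules; the first applicable rule determines the action.
    natural_strategy = [
        (lambda obs: obs.get("door_open", False), "enter"),
        (lambda obs: obs.get("has_key", False), "open_door"),
        (lambda obs: True, "search_for_key"),   # default rule: always applicable
    ]

    def next_action(strategy, observation):
        """Fire the first rule whose guard is satisfied by the observation."""
        for guard, action in strategy:
            if guard(observation):
                return action
        raise ValueError("no applicable rule (end the list with a default)")

    print(next_action(natural_strategy, {"has_key": True}))   # open_door
    print(next_action(natural_strategy, {}))                  # search_for_key
    ```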

    Hiding actions in multi-player games

    In the game-theoretic approach to reasoning about multi-agent systems, imperfect information plays a key role. It requires that players act in accordance with the information available to them. The complexity of deciding games with imperfect information is generally worse than that of games with perfect information, and can easily become undecidable. In many real-life scenarios, however, we only have to deal with very restricted forms of imperfect information and limited interactions among players. In these settings, the challenge is to come up with elementary decision procedures, as we do here. We study multi-player concurrent games where (i) Player0's objective is to reach a target W, and (ii) the opponents try to prevent this but have only partial observation of Player0's actions. We study the problem of deciding whether the opponents can prevent Player0 from reaching W by beating every Player0 strategy. We show, using an automata-theoretic approach, that, assuming the opponents have the same partial observation and play under uniformity, the problem is in ExpTime.
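
    A small, hedged illustration of the uniformity ingredient mentioned here, under the assumption that the opponents observe only an abstraction of Player0's actions: actions with the same observation must receive the same response. The observation map and responses are invented for the example.

    ```python
    # Illustrative assumption: a1 and a2 are indistinguishable to the opponents.
    observation = {"a1": "o1", "a2": "o1", "a3": "o2"}

    def uniform_response(counter_strategy, player0_action):
        """Opponents react to what they observe, hence identically to a1 and a2."""
        return counter_strategy[observation[player0_action]]

    counter_strategy = {"o1": "block_left", "o2": "block_right"}
    print(uniform_response(counter_strategy, "a1"))   # block_left
    print(uniform_response(counter_strategy, "a2"))   # block_left (same observation)
    print(uniform_response(counter_strategy, "a3"))   # block_right
    ```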

    Verifying Strategic Abilities in Multi-agent Systems with Private-Data Sharing
